Reflection
Reflection is my animated graduate thesis film. The film is 6 minutes and 30 seconds long. The concept for the story was developed with a focus on character-driven animation, acting, and performance. I wanted to challenge myself by animating a character type I was not familiar with, someone I could not relate to on a personal level. I chose to animate a mother of two, about 40 years old, and I wanted her to go through a wide range of emotions throughout the film. I made it a silent film so that all the thoughts and emotions running through her head would have to come out through acting, letting me really work on my animation skills.
To begin with, the mother realizes through a series of events that she is letting life slip away, caught up in caring for her family. She is no longer the cheerful, beautiful heartthrob she used to be. As the story developed over time, I realized my character was evolving too. The story simply had to change and grow more complex to allow for a fuller exploration of the mother's character.
The film combines 3D and 2D animation elements. The majority of the 3D production was done in Autodesk Maya, while the 2D portions were hand-drawn animations done in TVPaint.
This paper outlines my film, from ideation to finished product, and the wonderful roller coaster ride that is the filmmaking process.
Co-Regularized Deep Representations for Video Summarization
Compact keyframe-based video summaries are a popular way of generating
viewership on video sharing platforms. Yet, creating relevant and compelling
summaries for arbitrarily long videos with a small number of keyframes is a
challenging task. We propose a comprehensive keyframe-based summarization
framework combining deep convolutional neural networks and restricted Boltzmann
machines. An original co-regularization scheme is used to discover meaningful
subject-scene associations. The resulting multimodal representations are then
used to select highly-relevant keyframes. A comprehensive user study is
conducted comparing our proposed method to a variety of schemes, including the
summarization currently in use by one of the most popular video sharing
websites. The results show that our method consistently outperforms the
baseline schemes for any given number of keyframes, both in terms of
attractiveness and informativeness. The lead is even more significant for
smaller summaries.

Comment: Video summarization, deep convolutional neural networks, co-regularized restricted Boltzmann machine
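The selection step at the end of a pipeline like this can be illustrated in miniature: score every frame by its similarity to a video-level representation and keep the top k. The sketch below uses a deliberately simple stand-in (cosine similarity to the feature centroid); it is not the paper's co-regularized RBM method, and all names are hypothetical.

```python
import numpy as np

def select_keyframes(frame_features, k=3):
    """Pick the k frames whose features are most similar (cosine) to the
    centroid of all frame features -- a crude proxy for 'relevance'.

    frame_features: array of shape (n_frames, dim)
    returns: indices of the k selected frames, best first
    """
    centroid = frame_features.mean(axis=0)
    # cosine similarity of each frame to the video-level centroid
    norms = np.linalg.norm(frame_features, axis=1) * np.linalg.norm(centroid)
    scores = frame_features @ centroid / np.maximum(norms, 1e-12)
    # argsort ascending, reverse for descending, truncate to k
    return np.argsort(scores)[::-1][:k]
```

In the paper's framework the per-frame representation would come from the learned multimodal model rather than raw features, but the top-k scoring structure is the same.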
Egocentric Activity Recognition with Multimodal Fisher Vector
With the increasing availability of wearable devices, research on egocentric
activity recognition has received much attention recently. In this paper, we
build a Multimodal Egocentric Activity dataset which includes egocentric videos
and sensor data of 20 fine-grained and diverse activity categories. We present
a novel strategy to extract temporal trajectory-like features from sensor data.
We propose to apply the Fisher Kernel framework to fuse video and temporal
enhanced sensor features. Experimental results show that, with careful design of
feature extraction and fusion algorithm, sensor data can enhance
information-rich video data. We make publicly available the Multimodal
Egocentric Activity dataset to facilitate future research.

Comment: 5 pages, 4 figures, accepted at ICASSP 2016
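Multimodal fusion of this kind can be sketched with a simple recipe often used around Fisher vectors: power-normalize and L2-normalize each modality's descriptor, then concatenate. The function below is an illustrative sketch under that assumption, not the authors' exact Fisher Kernel fusion pipeline; the names are hypothetical.

```python
import numpy as np

def fuse_descriptors(video_desc, sensor_desc):
    """Fuse per-clip descriptors from two modalities.

    Each descriptor is power-normalized (signed square root) and then
    L2-normalized, so neither modality dominates the concatenation by
    scale alone.
    """
    def normalize(x):
        x = np.sign(x) * np.sqrt(np.abs(x))   # power normalization
        n = np.linalg.norm(x)
        return x / n if n > 0 else x          # L2 normalization
    return np.concatenate([normalize(video_desc), normalize(sensor_desc)])
```

The fused vector can then be fed to a linear classifier; in the actual framework the per-modality descriptors would be Fisher vectors computed against a Gaussian mixture model.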